Loss aware post-training quantization

Authors

Yury Nahshan, Brian Chmiel, Chaim Baskin, Evgenii Zheltonozhskii, Ron Banner, Alex M. Bronstein, Avi Mendelson

Abstract

Neural network quantization enables the deployment of large models on resource-constrained devices. Current post-training quantization methods fall short in terms of accuracy for INT4 (or lower) but provide reasonable accuracy for INT8 (or above). In this work, we study the effect of quantization on the structure of the loss landscape. We show that the structure is flat and separable for mild quantization, enabling straightforward post-training quantization methods to achieve good results. On the other hand, we show that with more aggressive quantization the loss landscape becomes highly non-separable with steep curvature, making the selection of quantization parameters challenging. Armed with this understanding, we design a method that quantizes the layer parameters jointly, enabling significant improvement over current post-training quantization methods. Reference implementation is available at https://github.com/ynahshan/nn-quantization-pytorch/tree/master/lapq.
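The parameter-selection problem the abstract alludes to can be made concrete with a small sketch. The following Python/NumPy snippet is illustrative only, not the paper's LAPQ implementation: it uses a symmetric uniform quantizer and grid-searches a single clipping threshold, with quantization MSE as an assumed stand-in for the task loss.

    import numpy as np

    def uniform_quantize(x, clip, n_bits=4):
        # Symmetric uniform quantizer: round onto 2**n_bits - 1 levels
        # inside [-clip, clip].
        n_levels = 2 ** (n_bits - 1) - 1          # 7 positive levels for INT4
        scale = clip / n_levels
        return scale * np.clip(np.round(x / scale), -n_levels, n_levels)

    def search_clip(x, n_bits=4, n_grid=100):
        # Grid-search the clipping threshold that minimizes quantization MSE.
        grid = np.linspace(np.abs(x).max() / n_grid, np.abs(x).max(), n_grid)
        errs = [np.mean((uniform_quantize(x, c, n_bits) - x) ** 2) for c in grid]
        return grid[int(np.argmin(errs))]

    w = np.random.randn(4096)
    print(search_clip(w, n_bits=4))   # optimum sits well below max|w| at 4 bits

Independent per-tensor searches of this kind are adequate when the loss landscape is flat and separable (the INT8 regime described above); at INT4 the landscape becomes steep and coupled across parameters, which is what motivates the paper's joint, layer-wise optimization.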


Related articles

Loss-aware Weight Quantization of Deep Networks

The huge size of deep networks hinders their use in small computing devices. In this paper, we consider compressing the network by weight quantization. We extend a recently proposed loss-aware weight binarization scheme to ternarization, with possibly different scaling parameters for the positive and negative weights, and m-bit (where m > 2) quantization. Experiments on feedforward and recurren...
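As a rough illustration of the ternarization scheme described above, here is a minimal NumPy sketch with separate scales for positive and negative weights. It is a simplification under stated assumptions: the paper fits the quantizer to the actual training loss, while this sketch uses the common MSE-motivated threshold and mean-based scales.

    import numpy as np

    def ternarize(w, delta_frac=0.7):
        # Map w onto {-alpha_n, 0, +alpha_p} with independent scales for the
        # positive and negative sides (MSE stand-in, not loss-aware).
        delta = delta_frac * np.mean(np.abs(w))    # common threshold heuristic
        pos, neg = w > delta, w < -delta
        alpha_p = w[pos].mean() if pos.any() else 0.0
        alpha_n = -w[neg].mean() if neg.any() else 0.0
        return alpha_p * pos - alpha_n * neg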


Vector Quantization and the FSCL Training

An artificial neural network vector quantizer is developed for use in data compression applications such as Digital Video. Two techniques are employed to improve the performance of the encoder. First, Differential Vector Quantization (DVQ) is used to significantly improve edge fidelity. Second, an adaptive ANN algorithm known as Frequency-Sensitive Competitive Learning is used to develop a frequenc...
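The frequency-sensitive rule mentioned above can be sketched in a few lines; details such as the squared-Euclidean metric and the count-scaled distance are assumptions of this sketch:

    import numpy as np

    def fscl_codebook(data, k=16, lr=0.05, epochs=5, seed=0):
        # Frequency-Sensitive Competitive Learning: each codeword's distance
        # is scaled by its win count, so underused codewords stay competitive
        # and the codebook ends up evenly utilized.
        rng = np.random.default_rng(seed)
        codebook = data[rng.choice(len(data), k, replace=False)].copy()
        wins = np.ones(k)
        for _ in range(epochs):
            for x in data:
                j = np.argmin(wins * np.sum((codebook - x) ** 2, axis=1))
                codebook[j] += lr * (x - codebook[j])
                wins[j] += 1
        return codebook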


Learning Vector Quantization with Training Count (LVQTC)

Kohonen's learning vector quantization (LVQ) is modified by attributing training counters to each neuron, which record its training statistics. During training, this allows for dynamic self-allocation of the neurons to classes. In the classification stage, the training counters provide an estimate of the reliability of classification of the single neurons, which can be exploited to obtain a substant...
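A simplified sketch of the counter idea on top of a plain LVQ1 update follows; LVQTC additionally uses the counters to create and prune neurons during training, which is omitted here:

    import numpy as np

    def lvq_with_counters(X, y, protos, proto_labels, lr=0.05, epochs=5):
        # LVQ1 update plus a per-prototype training counter; the counter acts
        # as a rough reliability estimate at classification time.
        counters = np.zeros(len(protos))
        for _ in range(epochs):
            for x, label in zip(X, y):
                j = np.argmin(np.sum((protos - x) ** 2, axis=1))
                step = lr if proto_labels[j] == label else -lr
                protos[j] += step * (x - protos[j])   # attract or repel winner
                counters[j] += 1
        return protos, counters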


Fuzzy Concepts in Vector Quantization Training

Vector quantization and clustering are two different problems for which similar techniques are used. We analyze some approaches to the synthesis of a vector quantization codebook, and their similarities with corresponding clustering algorithms. We outline the role of fuzzy concepts in the performance of these algorithms, and propose an alternative way to use fuzzy concepts as a modeling tool fo...
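One standard way fuzzy concepts enter codebook synthesis is the fuzzy c-means update, in which every training vector pulls on every codeword through a soft membership; the generic sketch below is not necessarily the variant this paper proposes:

    import numpy as np

    def fuzzy_cmeans_codebook(X, k=8, m=2.0, iters=50, seed=0):
        # Soft memberships replace the hard nearest-neighbour assignment of
        # k-means/LBG; m > 1 controls how fuzzy the memberships are.
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), k, replace=False)].copy()
        for _ in range(iters):
            d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
            U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
            C = (U.T ** m) @ X / np.sum(U.T ** m, axis=1, keepdims=True)
        return C, U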



Journal

Journal title: Machine Learning

Year: 2021

ISSN: 0885-6125, 1573-0565

DOI: https://doi.org/10.1007/s10994-021-06053-z